674 results for "optimal experimental design"
Search Results
2. Robust design for mixture experiments: an efficient class of exchangeable designs for Scheffé polynomials.
- Author
-
García Camacha Gutiérrez, Irene, Martín Martín, Raúl, Polo Sanz, José-Luis, and Sebastià Bargues, Àngela
- Abstract
Modern industry, engineering, and science seek to explore new methods to determine the composition that optimally describes certain product features. The goal of mixture design is to determine the proportions that optimize some property of the response. Most practitioners adopt a classical design approach since they vaguely know the model form prior to running their experiments. This work is focused on finding optimal designs for Scheffé polynomials where the parametric description of the response function may be inadequate. Our purpose is to obtain an analytical solution to the continuous problem (when possible) and to otherwise provide a numerical alternative. Theoretical results are proven for binary blends, while other strategies based on imposing restrictions and numerical approaches are provided for ternary blends. A real example illustrates the favorability of the results for a three-salt mixture setup. The geometrical properties observed in the designs encouraged the investigation of a class of exchangeable designs. The results reveal that this class of restricted designs may be widely recommended for their simplicity, high performance, and computational efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
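For readers unfamiliar with the Scheffé polynomials discussed in entry 2, the minimal sketch below evaluates a D-type criterion for a few textbook three-component mixture designs under a second-order Scheffé model. The candidate designs and the plain log-determinant criterion are illustrative assumptions and are unrelated to the restricted exchangeable class the paper proposes.

```python
import numpy as np

def scheffe_quadratic_row(x):
    """Second-order Scheffé polynomial terms for a 3-component mixture x."""
    x1, x2, x3 = x
    return np.array([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

def d_criterion(design):
    """log det of the normalized information matrix (larger is better)."""
    F = np.array([scheffe_quadratic_row(x) for x in design])
    sign, logdet = np.linalg.slogdet(F.T @ F / len(design))
    return logdet if sign > 0 else -np.inf   # singular: model not estimable

vertices = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
edge_mid = [(.5, .5, 0), (.5, 0, .5), (0, .5, .5)]
centroid = [(1 / 3, 1 / 3, 1 / 3)]

designs = {
    "{3,2} simplex lattice (6 runs)": vertices + edge_mid,
    "simplex centroid (7 runs)": vertices + edge_mid + centroid,
    "vertices + centroid only (4 runs)": vertices + centroid,
}
for name, d in designs.items():
    print(f"{name}: log det(M) = {d_criterion(d):.3f}")
```

The four-run design returns minus infinity because the binary blending terms cannot be estimated, which is exactly the kind of deficiency a criterion-based comparison exposes before any experiments are run.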
3. Expert‐in‐the‐loop design of integral nuclear data experiments.
- Author
-
Michaud, Isaac, Grosskopf, Michael, Hutchinson, Jesson, and Vander Wiel, Scott
- Subjects
- *
OPTIMAL designs (Statistics), *INTEGRALS, *EXPERIMENTAL design, *NUCLEAR reactor safety measures
- Abstract
Nuclear data are fundamental inputs to radiation transport codes used for reactor design and criticality safety. The design of experiments to reduce nuclear data uncertainty has been a challenge for many years, but advances in the sensitivity calculations of radiation transport codes within the last two decades have made optimal experimental design possible. The design of integral nuclear experiments poses numerous challenges not emphasized in classical optimal design, in particular, constrained design spaces (in both a statistical and engineering sense), severely under‐determined systems, and optimality uncertainty. We present a design pipeline to optimize critical experiments that uses constrained Bayesian optimization within an iterative expert‐in‐the‐loop framework. We show a successfully completed experiment campaign designed with this framework that involved two critical configurations and multiple measurements that targeted compensating errors in 239Pu nuclear data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. I-optimal or G-optimal: Do we have to choose?
- Author
-
Walsh, Stephen J., Lu, Lu, and Anderson-Cook, Christine M.
- Subjects
PARETO analysis, PARTICLE swarm optimization, RESPONSE surfaces (Statistics), OPTIMAL designs (Statistics)
- Abstract
When optimizing an experimental design for good prediction performance based on an assumed second order response surface model, it is common to focus on a single optimality criterion, either G-optimality, for best worst-case prediction precision, or I-optimality, for best average prediction precision. In this article, we illustrate how using particle swarm optimization to construct a Pareto front of non-dominated designs that balance these two criteria yields some highly desirable results. In most scenarios, there are designs that simultaneously perform well for both criteria. Seeing alternative designs that vary how they balance the performance of G- and I-efficiency provides experimenters with choices that allow selection of a better match for their study objectives. We provide an extensive repository of Pareto fronts with designs for 17 common experimental scenarios for 2 (design size N = 6 to 12), 3 (N = 10 to 16) and 4 (N = 15, 17, 20) experimental factors. These, when combined with a detailed strategy for how to efficiently analyze, assess, and select between alternatives, provide the reader with the tools to select the ideal design with a tailored balance between G- and I-optimality for their own experimental situations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
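The trade-off balanced in entry 4 can be made concrete with a small calculation: for an assumed second-order response surface model, the I-criterion is the average scaled prediction variance over the design region and the G-criterion is its worst case. The sketch below evaluates both for a candidate nine-run design on a grid of two coded factors; it is a minimal illustration of the two criteria, not the authors' Pareto-front construction with particle swarm optimization, and the example design and grid resolution are arbitrary choices.

```python
import numpy as np

def quad_model_matrix(X):
    """Full second-order model matrix for two coded factors."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

def scaled_prediction_variance(design, grid):
    """N * f(x)' (F'F)^{-1} f(x) evaluated at every grid point."""
    F = quad_model_matrix(design)
    M_inv = np.linalg.inv(F.T @ F)
    G = quad_model_matrix(grid)
    return len(design) * np.einsum("ij,jk,ik->i", G, M_inv, G)

# Candidate design: the full 3^2 factorial (9 runs) in coded units.
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                   [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)

# Evaluation grid over the square region [-1, 1]^2.
g = np.linspace(-1, 1, 41)
grid = np.array([[a, b] for a in g for b in g])

spv = scaled_prediction_variance(design, grid)
p = 6  # number of model parameters
print(f"I-criterion (average SPV): {spv.mean():.3f}")
print(f"G-criterion (max SPV):     {spv.max():.3f}")
print(f"G-efficiency:              {100 * p / spv.max():.1f}%")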
5. Sensitivity Analysis of Genome-Scale Metabolic Flux Prediction.
- Author
-
Niu, Puhua, Soto, Maria J, Huang, Shuai, Yoon, Byung-Jun, Dougherty, Edward R, Alexander, Francis J, Blaby, Ian, and Qian, Xiaoning
- Subjects
Bayes Theorem, Models, Biological, Gene Regulatory Networks, Metabolic Networks and Pathways, Metabolic Flux Analysis, Bayesian network structure learning, metabolic engineering, optimal experimental design, regulated metabolic network modeling, uncertainty quantification, Genetics, Human Genome, Mathematical Sciences, Biological Sciences, Information and Computing Sciences, Bioinformatics
- Abstract
TRIMER, Transcription Regulation Integrated with MEtabolic Regulation, is a genome-scale modeling pipeline targeting metabolic engineering applications. Using TRIMER, regulated metabolic reactions can be effectively predicted by integrative modeling of metabolic reactions with a transcription factor-gene regulatory network (TRN), which is modeled through a Bayesian network (BN). In this article, we focus on sensitivity analysis of metabolic flux prediction for uncertainty quantification of BN structures for TRN modeling in TRIMER. We propose a computational strategy to construct the uncertainty class of TRN models based on the inferred regulatory order uncertainty given transcriptomic expression data. With that, we analyze the prediction sensitivity of the TRIMER pipeline for the metabolite yields of interest. The obtained sensitivity analyses can guide optimal experimental design (OED) to help acquire new data that can enhance TRN modeling and achieve specific metabolic engineering objectives, including metabolite yield alterations. We have performed small- and large-scale simulated experiments, demonstrating the effectiveness of our developed sensitivity analysis strategy for BN structure learning to quantify the edge importance in terms of metabolic flux prediction uncertainty reduction and its potential to effectively guide OED.
- Published
- 2023
6. Optimal Sensor Placement for Developing Reliable Digital Twins of Structures
- Author
-
Ercan, Tulay, Papadimitriou, Costas, Zimmerman, Kristin B., Series Editor, Platz, Roland, editor, Flynn, Garrison, editor, Neal, Kyle, editor, and Ouellette, Scott, editor
- Published
- 2024
- Full Text
- View/download PDF
7. Statistical modeling of fully nonlinear hydrodynamic loads on offshore wind turbine monopile foundations using wave episodes and targeted CFD simulations through active sampling
- Author
-
Stephen Guth, Eirini Katsidoniotaki, and Themistoklis P. Sapsis
- Subjects
active sampling, heavy tails and extreme events, offshore structures, optimal experimental design, reduced order modeling, wave episodes, Renewable energy sources, TJ807-830
- Abstract
Accurately determining hydrodynamic force statistics is crucial for designing offshore engineering structures, including offshore wind turbine foundations, due to the significant impact of nonlinear wave–structure interactions. However, obtaining precise load statistics often involves computationally intensive simulations. Furthermore, the estimation of statistics using current practices is subject to ongoing discussion due to the inherent uncertainty involved. To address these challenges, we present a novel machine learning framework that leverages data‐driven surrogate modeling to predict hydrodynamic loads on monopile foundations while reducing reliance on costly simulations and facilitate the load statistics reconstruction. The primary advantage of our approach is the significant reduction in evaluation time compared to traditional modeling methods. The novelty of our framework lies in its efficient construction of the surrogate model, utilizing the Gaussian process regression machine learning technique and a Bayesian active learning method to sequentially sample wave episodes that contribute to accurate predictions of extreme hydrodynamic forces. Additionally, a spectrum transfer technique combines computational fluid dynamics (CFD) results from both quiescent and extreme waves, further reducing data requirements. This study focuses on reducing the dimensionality of stochastic irregular wave episodes and their associated hydrodynamic force time series. Although the dimensionality reduction is linear, Gaussian process regression successfully captures high‐order correlations. Furthermore, our framework incorporates built‐in uncertainty quantification capabilities, facilitating efficient parameter sampling using traditional CFD tools. This paper provides comprehensive implementation details and demonstrates the effectiveness of our approach in delivering reliable statistics for hydrodynamic loads while overcoming the computational cost constraints associated with classical modeling methods.
- Published
- 2024
- Full Text
- View/download PDF
8. Optimal experimental design for precise parameter estimation in competitive cross-reaction equilibria.
- Author
-
Zade, Somaye Vali and Abdollahi, Hamid
- Subjects
- *
OPTIMAL designs (Statistics), *PARAMETER estimation, *CHEMICAL equilibrium, *CROSS reactions (Immunology), *CHEMICAL affinity
- Abstract
Precise and accurate determination of equilibrium constants in chemical systems plays a crucial role in understanding their behavior, predicting reactions, and optimizing processes, leading to advancements in materials science, pharmaceuticals, and environmental studies. Various procedures, such as the mole ratio method, continuous variation, and titration, are commonly employed to investigate equilibrium systems and calculate equilibrium constants involving host–guest complexes. However, the impact of different experimental designs on the accuracy and precision of the fitted parameters has not been extensively explored. In this study, we focus on the role of experimental design in monitoring chemical equilibria and its influence on model parameter estimation. The indicator displacement assay and guest displacement assay are investigated as typical monitoring methods, and the optimal experimental design approach is introduced to identify suitable probe systems. The global analysis is also explored to improve parameter estimation by simultaneously fitting multiple datasets. Furthermore, non-regular procedures are examined, demonstrating the potential for robust model fitting with minimal measurements and providing insights into selecting appropriate probe systems. The results indicate that with appropriate design for monitoring equilibrium positions in competitive reactions, the probe equilibrium can differ by several orders of magnitude (more than 3 units) from the target equilibrium constant. Yet, accurate and precise determination of the equilibrium constant can still be achieved. The findings shed light on the importance of optimal experimental design for accurately determining thermodynamic parameters and binding affinities in complex chemical systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Robust online active learning.
- Author
-
Cacciarelli, Davide, Kulahci, Murat, and Tyssedal, John Sølve
- Subjects
- *
ACTIVE learning, *ONLINE education, *OPTIMAL designs (Statistics), *LEARNING strategies, *DEEP learning, *MANUFACTURING processes
- Abstract
In many industrial applications, obtaining labeled observations is not straightforward as it often requires the intervention of human experts or the use of expensive testing equipment. In these circumstances, active learning can be highly beneficial in suggesting the most informative data points to be used when fitting a model. Reducing the number of observations needed for model development alleviates both the computational burden required for training and the operational expenses related to labeling. Online active learning, in particular, is useful in high‐volume production processes where the decision about the acquisition of the label for a data point needs to be taken within an extremely short time frame. However, despite the recent efforts to develop online active learning strategies, the behavior of these methods in the presence of outliers has not been thoroughly examined. In this work, we investigate the performance of online active linear regression in contaminated data streams. Our study shows that the currently available query strategies are prone to sample outliers, whose inclusion in the training set eventually degrades the predictive performance of the models. To address this issue, we propose a solution that bounds the search area of a conditional D‐optimal algorithm and uses a robust estimator. Our approach strikes a balance between exploring unseen regions of the input space and protecting against outliers. Through numerical simulations, we show that the proposed method is effective in improving the performance of online active learning in the presence of outliers, thus expanding the potential applications of this powerful tool. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
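Entry 9 modifies a conditional D-optimality query rule so that outliers in the data stream are not preferentially labelled. A rough sketch of the underlying idea follows: an incoming point is queried only if its D-optimality gain (the increase in the log-determinant of the information matrix) exceeds a threshold and it lies inside a bounded region around the bulk of previously seen inputs. The threshold, the median/MAD region, and the simulated contamination are illustrative assumptions, not the authors' exact algorithm, which also pairs the query rule with a robust regression estimator.

```python
import numpy as np

def logdet_gain(M, x):
    """Increase in log det of the information matrix if x is added."""
    return np.linalg.slogdet(M + np.outer(x, x))[1] - np.linalg.slogdet(M)[1]

def in_bounded_region(x, seen, n_mads=3.0):
    """Crude outlier guard: each coordinate within n_mads MADs of the median."""
    med = np.median(seen, axis=0)
    mad = np.median(np.abs(seen - med), axis=0) + 1e-12
    return np.all(np.abs(x - med) <= n_mads * mad)

rng = np.random.default_rng(0)
d, gain_threshold = 3, 0.05
M = 1e-3 * np.eye(d)                      # regularized information matrix
seen = rng.normal(size=(50, d))           # clean historical inputs

labelled = 0
for t in range(500):
    x = rng.normal(size=d)
    if rng.random() < 0.05:               # 5% contaminated, far-away points
        x += 10.0
    # Query the label only if informative AND inside the bounded region.
    if logdet_gain(M, x) > gain_threshold and in_bounded_region(x, seen):
        M += np.outer(x, x)
        labelled += 1
    seen = np.vstack([seen, x])

print(f"labelled {labelled} of 500 streamed points")
```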
10. Information-driven optimal experimental design with deep neural network surrogate model for composite materials.
- Author
-
Jang, Kyung Suk and Yun, Gun Jin
- Subjects
- *
ARTIFICIAL neural networks, *OPTIMAL designs (Statistics), *COMPOSITE materials, *FINITE element method
- Abstract
Optimal experimental design (OED) provides informative experimental resources and reduces the inherent uncertainty of an experiment; it is a framework for making the most of the experiments performed by controlling the design variables. This paper proposes an OED framework combined with a deep neural network (DNN) surrogate model to find the optimal design over a large and complex design space. The OED optimization maximizes the conditional mutual information (CMI) between hidden properties and composite structural performances. The approximate coordinate exchange (ACE) algorithm was applied to find the optimal design over the high-dimensional, complicated design space. For fast optimization, the surrogate DNN model was used instead of finite element analysis. Two examples demonstrate the OED framework. In both demonstrations, the OED produced results consistent with heuristic knowledge. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Statistical modeling of fully nonlinear hydrodynamic loads on offshore wind turbine monopile foundations using wave episodes and targeted CFD simulations through active sampling.
- Author
-
Guth, Stephen, Katsidoniotaki, Eirini, and Sapsis, Themistoklis P.
- Subjects
NONLINEAR statistical models, WIND pressure, KRIGING, ESTIMATION theory, COMPUTATIONAL fluid dynamics, WIND turbine blades, WIND turbines
- Abstract
Accurately determining hydrodynamic force statistics is crucial for designing offshore engineering structures, including offshore wind turbine foundations, due to the significant impact of nonlinear wave–structure interactions. However, obtaining precise load statistics often involves computationally intensive simulations. Furthermore, the estimation of statistics using current practices is subject to ongoing discussion due to the inherent uncertainty involved. To address these challenges, we present a novel machine learning framework that leverages data‐driven surrogate modeling to predict hydrodynamic loads on monopile foundations while reducing reliance on costly simulations and facilitate the load statistics reconstruction. The primary advantage of our approach is the significant reduction in evaluation time compared to traditional modeling methods. The novelty of our framework lies in its efficient construction of the surrogate model, utilizing the Gaussian process regression machine learning technique and a Bayesian active learning method to sequentially sample wave episodes that contribute to accurate predictions of extreme hydrodynamic forces. Additionally, a spectrum transfer technique combines computational fluid dynamics (CFD) results from both quiescent and extreme waves, further reducing data requirements. This study focuses on reducing the dimensionality of stochastic irregular wave episodes and their associated hydrodynamic force time series. Although the dimensionality reduction is linear, Gaussian process regression successfully captures high‐order correlations. Furthermore, our framework incorporates built‐in uncertainty quantification capabilities, facilitating efficient parameter sampling using traditional CFD tools. This paper provides comprehensive implementation details and demonstrates the effectiveness of our approach in delivering reliable statistics for hydrodynamic loads while overcoming the computational cost constraints associated with classical modeling methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Optimal Follow Up Designs for Fractional Partial Differential Equations with Application to a Convection-Advection Model.
- Author
-
Boone, Edward L., Ghanam, Ryad A., and Lee, Albert H.
- Subjects
PARTIAL differential equations, FRACTIONAL differential equations, ADVECTION-diffusion equations, OPTIMAL designs (Statistics), RESEARCH personnel
- Abstract
As the mathematical properties of Fractional Partial Differential Equations are rapidly being developed, there is an increasing desire by researchers to employ these models in real-world, data-oriented contexts. The main barrier to employing these models is the choice of the fractional order α. Recently, [6] showed how to both estimate and make inferences about α from a Bayesian perspective. However, for experimental settings one needs to be able to design experiments that will provide optimal information about α. This work demonstrates how to use information-based criteria, namely A-, D- and E-optimality, to choose sampling locations in follow-up designs. Specifically, the simultaneous addition of one, two and three measurement locations is considered for a simple example. The simultaneous addition of four and five measurement locations is also considered across a variety of values of α. The results show that each of the criteria provides different optimal measurement locations as the number of additional measurement locations is increased, indicating that the choice of optimality criterion should not be made arbitrarily. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
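Entry 12 compares A-, D-, and E-optimality when adding measurement locations to an existing design. Under a linearized model with information matrix M = F'F, the three criteria reduce to simple matrix functionals, so a brute-force search over candidate follow-up locations fits in a few lines. The sketch below uses an arbitrary polynomial-in-location sensitivity model purely for illustration; in the paper the sensitivities come from a fractional PDE solver and a Bayesian analysis.

```python
import numpy as np
from itertools import combinations

def sensitivity_row(s):
    """Hypothetical sensitivity of the observation at location s in [0, 1]
    to the model parameters (stand-in for PDE-based sensitivities)."""
    return np.array([1.0, s, s**2])

def criteria(M):
    eig = np.linalg.eigvalsh(M)
    return {"A": np.trace(np.linalg.inv(M)),   # minimize
            "D": np.prod(eig),                 # maximize (det M)
            "E": eig.min()}                    # maximize (smallest eigenvalue)

existing = [0.1, 0.5, 0.9]                     # current measurement locations
candidates = np.round(np.linspace(0.0, 1.0, 21), 2)

M0 = sum(np.outer(sensitivity_row(s), sensitivity_row(s)) for s in existing)

# Add two follow-up locations simultaneously, once per criterion.
for name, better in [("A", min), ("D", max), ("E", max)]:
    best = better(combinations(candidates, 2),
                  key=lambda pair: criteria(
                      M0 + sum(np.outer(sensitivity_row(s), sensitivity_row(s))
                               for s in pair))[name])
    best = tuple(round(float(s), 2) for s in best)
    print(f"{name}-optimal follow-up pair: {best}")
```

Because A-optimality is minimized while D- and E-optimality are maximized, the three searches can legitimately land on different follow-up locations, which is the paper's central observation.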
13. Optimal study designs for cluster randomised trials: An overview of methods and results.
- Author
-
Watson, Samuel I, Girling, Alan, and Hemming, Karla
- Subjects
- *
OPTIMIZATION algorithms, *COMBINATORIAL optimization, *EXPERIMENTAL literature, *OPTIMAL designs (Statistics), *EXPERIMENTAL design
- Abstract
There are multiple possible cluster randomised trial designs that vary in when the clusters cross between control and intervention states, when observations are made within clusters, and how many observations are made at each time point. Identifying the most efficient study design is complex though, owing to the correlation between observations within clusters and over time. In this article, we present a review of statistical and computational methods for identifying optimal cluster randomised trial designs. We also adapt methods from the experimental design literature for experimental designs with correlated observations to the cluster trial context. We identify three broad classes of methods: using exact formulae for the treatment effect estimator variance for specific models to derive algorithms or weights for cluster sequences; generalised methods for estimating weights for experimental units; and, combinatorial optimisation algorithms to select an optimal subset of experimental units. We also discuss methods for rounding experimental weights, extensions to non-Gaussian models, and robust optimality. We present results from multiple cluster trial examples that compare the different methods, including determination of the optimal allocation of clusters across a set of cluster sequences and selecting the optimal number of single observations to make in each cluster-period for both Gaussian and non-Gaussian models, and including exchangeable and exponential decay covariance structures. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. Data assimilation-based inversion of soil hydraulic parameters: research progress and prospects [基于数据同化的土壤水力参数反演方法: 研究进展与展望].
- Author
-
满 俊, 张江江, 郑 强, 尧一骏, and 曾令藻
- Subjects
OPTIMAL designs (Statistics), MACHINE learning, SOILS
- Published
- 2023
- Full Text
- View/download PDF
15. Robust optimal designs using a model misspecification term.
- Author
-
Tsirpitzi, Renata Eirini, Miller, Frank, and Burman, Carl-Fredrik
- Subjects
- *
FIXED effects model, *RANDOM effects model, *OPTIMAL designs (Statistics), *STOCHASTIC processes, *STOCHASTIC models
- Abstract
Much of classical optimal design theory relies on specifying a model with only a small number of parameters. In many applications, such models will give reasonable approximations. However, they will often be found not to be entirely correct when enough data are at hand. A property of classical optimal design methodology is that the amount of data does not influence the design when a fixed model is used. However, it is reasonable that a low-dimensional model is satisfactory only if limited data are available. With more data available, more aspects of the underlying relationship can be assessed. We consider a simple model that is not thought to be fully correct. The model misspecification, that is, the difference between the true mean and the simple model, is explicitly modeled with a stochastic process. This gives a unified approach to handle situations with both limited and rich data. Our objective is to estimate the combined model, which is the sum of the simple model and the assumed misspecification process. In our situation, the low-dimensional model can be viewed as a fixed effect and the misspecification term as a random effect in a mixed-effects model. Our aim is to predict within this model. We describe how we minimize the prediction error using an optimal design. We compute optimal designs for the full model in different cases. The results confirm that the optimal design depends strongly on the sample size. In low-information situations, traditional optimal designs for models with a small number of parameters are sufficient, while the inclusion of the misspecification term leads to very different designs in data-rich cases. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. A model-based approach to designing developmental toxicology experiments using sea urchin embryos.
- Author
-
Collins, Michael D, Cui, Elvis Han, Hyun, Seung Won, and Wong, Weng Kee
- Subjects
Embryo, Nonmammalian, Animals, Sea Urchins, Trimethoprim, Anti-Infective Agents, Toxicology, Embryonic Development, Dose-Response Relationship, Drug, Research Design, Approximate design, D-optimality, Dose–response, Optimal experimental design, Sea urchin embryo, Generic health relevance, Dose-response, Pharmacology and Pharmaceutical Sciences
- Abstract
The key aim of this paper is to suggest a more quantitative approach to designing a dose-response experiment, and more specifically, a concentration-response experiment. The work proposes a departure from the traditional experimental design to determine a dose-response relationship in a developmental toxicology study. It is proposed that a model-based approach to determine a dose-response relationship can provide the most accurate statistical inference for the underlying parameters of interest, which may be estimating one or more model parameters or pre-specified functions of the model parameters, such as lethal dose, at maximal efficiency. When the design criterion or criteria can be determined at the onset, there are demonstrated efficiency gains using a more carefully selected model-based optimal design as opposed to an ad-hoc empirical design. As an illustration, a model-based approach was theoretically used to construct efficient designs for inference in a developmental toxicity study of sea urchin embryos exposed to trimethoprim. This study compares and contrasts the results obtained using model-based optimal designs versus an ad-hoc empirical design.
- Published
- 2022
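Entry 16 argues for replacing ad-hoc, equally spaced concentration grids with model-based optimal designs. As a minimal illustration of the idea (not the paper's sea-urchin analysis), the sketch below finds a locally D-optimal two-point allocation for a two-parameter logistic dose-response model at a guessed parameter value by maximizing the determinant of the Fisher information over candidate dose pairs. The guessed parameters and the dose grid are assumptions introduced for the example.

```python
import numpy as np
from itertools import combinations

def fisher_info_logistic(doses, beta0, beta1):
    """Fisher information for a 2-parameter logistic dose-response model,
    P(response | dose x) = 1 / (1 + exp(-(beta0 + beta1 * x)))."""
    M = np.zeros((2, 2))
    for x in doses:
        p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
        f = np.array([1.0, x])
        M += p * (1 - p) * np.outer(f, f)
    return M

beta0_guess, beta1_guess = -3.0, 1.5          # prior guess (assumption)
dose_grid = np.linspace(0.0, 6.0, 61)         # candidate concentrations

# Locally D-optimal pair of doses (equal weights) for the guessed parameters.
best_pair = max(combinations(dose_grid, 2),
                key=lambda pair: np.linalg.det(
                    fisher_info_logistic(pair, beta0_guess, beta1_guess)))
print("locally D-optimal dose pair:", np.round(best_pair, 2))
```

Because the design depends on the guessed parameters, such locally optimal designs are usually paired with prior knowledge or sequential updating, which is exactly the practical question the paper examines for developmental toxicology.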
17. A Hybrid Queueing Search and Gradient-Based Algorithm for Optimal Experimental Design
- Author
-
Zhang, Yue, Zhai, Yi, Xia, Zhenyang, Wang, Xinlong, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Huang, De-Shuang, editor, Premaratne, Prashan, editor, Jin, Baohua, editor, Qu, Boyang, editor, Jo, Kang-Hyun, editor, and Hussain, Abir, editor
- Published
- 2023
- Full Text
- View/download PDF
18. Bayesian Optimal Experimental Design for Inferring Causal Structure.
- Author
-
Zemplenyi, Michele and Miller, Jeffrey W.
- Subjects
BAYESIAN analysis, ENTROPY, ACTIVE learning, GENE regulatory networks, EXPERIMENTAL design
- Abstract
Inferring the causal structure of a system typically requires interventional data, rather than just observational data. Since interventional experiments can be costly, it is preferable to select interventions that yield the maximum amount of information about a system. We propose a novel Bayesian method for optimal experimental design by sequentially selecting interventions that minimize the expected posterior entropy as rapidly as possible. A key feature is that the method can be implemented by computing simple summaries of the current posterior, avoiding the computationally burdensome task of repeatedly performing posterior inference on hypothetical future datasets drawn from the posterior predictive. After deriving the method in a general setting, we apply it to the problem of inferring causal networks. We present a series of simulation studies, in which we find that the proposed method performs favorably compared to existing alternative methods. Finally, we apply the method to real data from two gene regulatory networks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
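The selection rule in entry 18, picking the intervention whose outcome is expected to shrink posterior entropy fastest, can be written compactly when the posterior over causal structures is maintained as explicit weights over a small set of candidate graphs. The sketch below does this for a toy two-variable problem with discretized outcomes; the candidate structures, their predictive probabilities, and the outcome discretization are illustrative stand-ins, and the paper's main contribution of avoiding nested posterior inference is not reproduced here.

```python
import numpy as np

def entropy(w):
    w = w[w > 0]
    return -np.sum(w * np.log(w))

# Two candidate causal structures for variables (X, Y): X->Y or Y->X.
# For each structure g and intervention a, a discrete predictive p(outcome | a, g).
# The numbers below are illustrative stand-ins, not learned from data.
likelihood = {
    "do(X=1)": {"X->Y": np.array([0.1, 0.9]),   # Y responds if X causes Y
                "Y->X": np.array([0.5, 0.5])},  # Y unaffected otherwise
    "do(Y=1)": {"X->Y": np.array([0.5, 0.5]),
                "Y->X": np.array([0.45, 0.55])},
}
prior = {"X->Y": 0.5, "Y->X": 0.5}
graphs = list(prior)

def expected_posterior_entropy(action):
    exp_H = 0.0
    for outcome in range(2):
        # Marginal probability of this outcome under the intervention.
        p_out = sum(prior[g] * likelihood[action][g][outcome] for g in graphs)
        # Posterior over structures after observing the outcome.
        post = np.array([prior[g] * likelihood[action][g][outcome] / p_out
                         for g in graphs])
        exp_H += p_out * entropy(post)
    return exp_H

for a in likelihood:
    print(f"{a}: expected posterior entropy = {expected_posterior_entropy(a):.3f}")
print("selected intervention:", min(likelihood, key=expected_posterior_entropy))
```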
19. A Genetic Algorithm-Enhanced Sensor Marks Selection Algorithm for Wavefront Aberration Modeling in Extreme-UV (EUV) Photolithography.
- Author
-
Magklaras, Aris, Alefragis, Panayiotis, Gogos, Christos, Valouxis, Christos, and Birbas, Alexios
- Subjects
- *
OPTIMAL designs (Statistics), *PHOTOLITHOGRAPHY, *ALGORITHMS, *DETECTORS, *MACHINE performance
- Abstract
In photolithographic processes, nanometer-level-precision wavefront-aberration models enable the machine to meet the overlay (OVL) drift and critical dimension (CD) specifications. Software control algorithms take these models as input and correct any expected wavefront imperfections before they reach the wafer. In this way, a near-optimal image is exposed on the wafer surface. Optimizing the parameters of these models, however, involves several time-costly sensor measurements which reduce the throughput performance of the machine in terms of exposed wafers per hour. Photolithography machines therefore face a trade-off between throughput and quality. One of the most common optimal experimental design (OED) problems in photolithography machines (and beyond) is how to choose the minimum number of sensor measurements that will provide the maximum amount of information. Additionally, each sensor measurement corresponds to a point on the wafer surface and therefore we must measure uniformly across the wafer surface as well. In order to solve this problem, we propose a sensor mark selection algorithm which exploits genetic algorithms. The proposed solution first selects a pool of points that qualify as candidates to be selected in order to meet the uniformity constraint. Then, the point that provides the maximum amount of information, quantified by the Fisher-based criteria of G-, D-, and A-optimality, is selected and added to the measurement scheme. This process, however, is considered "greedy", and for this reason, genetic algorithms (GA) are exploited to further improve the solution. By repeating the "greedy" part several times in parallel, we obtain an initial population that serves as the input to our GA. This meta-heuristic approach significantly outperforms the "greedy" approach. The proposed solution is applied in a real-life semiconductor industry use case and achieves interesting industrial as well as academic results. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
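The greedy stage described in entry 19 (pick, one at a time, the sensor mark that most increases a Fisher-based criterion while respecting spatial uniformity) is easy to prototype. The sketch below uses a D-optimality gain with a simple minimum-spacing rule on a unit-disc "wafer"; the polynomial regressor map, the spacing constraint, and the candidate pool are assumptions for illustration, and the genetic-algorithm refinement stage of the paper is not shown.

```python
import numpy as np

def regressors(p):
    """Stand-in regressors of the wavefront model at wafer position (x, y)."""
    x, y = p
    return np.array([1.0, x, y, x * y, x**2, y**2])

def greedy_d_optimal(candidates, n_select, min_spacing=0.35):
    """Greedily add marks maximizing log det of the information matrix,
    considering only marks at least min_spacing away from those already
    chosen (a crude uniformity constraint)."""
    chosen, M = [], 1e-6 * np.eye(6)
    for _ in range(n_select):
        best, best_gain = None, -np.inf
        for i, p in enumerate(candidates):
            if i in chosen:
                continue
            if chosen and min(np.linalg.norm(p - candidates[j]) for j in chosen) < min_spacing:
                continue
            f = regressors(p)
            gain = np.linalg.slogdet(M + np.outer(f, f))[1]
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:            # no candidate satisfies the spacing rule
            break
        chosen.append(best)
        f = regressors(candidates[best])
        M += np.outer(f, f)
    return chosen

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(400, 2))
pts = pts[np.linalg.norm(pts, axis=1) <= 1.0]   # keep points on the wafer disc
marks = greedy_d_optimal(pts, n_select=10)
print("selected mark indices:", marks)
```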
20. Adaptive and robust experimental design for linear dynamical models using Kalman filter.
- Author
-
Strouwen, Arno, Nicolaï, Bart M., and Goos, Peter
- Subjects
EXPERIMENTAL design, LINEAR dynamical systems, KALMAN filtering, FISHER information, OPTIMAL designs (Statistics), DYNAMICAL systems
- Abstract
Current experimental design techniques for dynamical systems often only incorporate measurement noise, while dynamical systems also involve process noise. To construct experimental designs we need to quantify their information content. The Fisher information matrix is a popular tool to do so. Calculating the Fisher information matrix for linear dynamical systems with both process and measurement noise involves estimating the uncertain dynamical states using a Kalman filter. The Fisher information matrix, however, depends on the true but unknown model parameters. In this paper we combine two methods to solve this issue and develop a robust experimental design methodology. First, Bayesian experimental design averages the Fisher information matrix over a prior distribution of possible model parameter values. Second, adaptive experimental design allows for this information to be updated as measurements are being gathered. This updated information is then used to adapt the remainder of the design. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
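A compressed sketch of the core computation in entry 20 is given below: quantify the information a candidate input sequence carries about the parameters of a linear dynamical model with both process and measurement noise, and average that information over a prior as in Bayesian design. The scalar model, noise levels, prior ranges, candidate inputs, and the Monte-Carlo, finite-difference estimate of the Fisher information are all illustrative simplifications; the paper treats general linear systems and adds an adaptive redesign loop not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
q, r, T = 0.05, 0.1, 40            # process noise, measurement noise, horizon

def simulate(theta, u):
    """Simulate y_k from x_{k+1} = a x_k + b u_k + w_k,  y_k = x_k + v_k."""
    a, b = theta
    x, y = 0.0, np.empty(len(u))
    for k, uk in enumerate(u):
        x = a * x + b * uk + rng.normal(scale=np.sqrt(q))
        y[k] = x + rng.normal(scale=np.sqrt(r))
    return y

def kalman_loglik(theta, u, y):
    """Innovation-form log-likelihood computed with a scalar Kalman filter."""
    a, b = theta
    x, P, ll = 0.0, 1.0, 0.0
    for k, uk in enumerate(u):
        x, P = a * x + b * uk, a * a * P + q      # time update
        S, e = P + r, y[k] - x                    # innovation variance, residual
        ll += -0.5 * (np.log(2 * np.pi * S) + e * e / S)
        K = P / S                                 # measurement update
        x, P = x + K * e, (1 - K) * P
    return ll

def fim_monte_carlo(theta, u, n_rep=100, h=1e-4):
    """Fisher information at theta, estimated as E[score score'] with the
    score from central differences of the Kalman-filter log-likelihood."""
    d = len(theta)
    M = np.zeros((d, d))
    for _ in range(n_rep):
        y = simulate(theta, u)
        score = np.empty(d)
        for i in range(d):
            tp = np.array(theta, float)
            tm = np.array(theta, float)
            tp[i] += h
            tm[i] -= h
            score[i] = (kalman_loglik(tp, u, y) - kalman_loglik(tm, u, y)) / (2 * h)
        M += np.outer(score, score)
    return M / n_rep

# Two candidate excitation sequences (the "designs") of equal amplitude.
t = np.arange(T)
designs = {"slow sine": np.sin(2 * np.pi * t / T),
           "fast sine": np.sin(2 * np.pi * 5 * t / T)}

# Bayesian D-criterion: average log det FIM over draws from a prior on (a, b).
prior_draws = np.column_stack([rng.uniform(0.6, 0.9, 5), rng.uniform(0.5, 1.5, 5)])
for name, u in designs.items():
    crit = np.mean([np.linalg.slogdet(fim_monte_carlo(th, u))[1] for th in prior_draws])
    print(f"{name}: Bayesian D-criterion = {crit:.2f}")
```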
21. Proof of Concept for Fast Equation of State Development Using an Integrated Experimental–Computational Approach.
- Author
-
Frotscher, Ophelia, Martinek, Viktor, Fingerhut, Robin, Yang, Xiaoxian, Vrabec, Jadran, Herzog, Roland, and Richter, Markus
- Subjects
- *
EQUATIONS of state, *THERMODYNAMICS, *MACHINE learning, *OPTIMAL designs (Statistics), *PROOF of concept, *BINARY mixtures
- Abstract
A multitude of industries, including energy and process engineering, as well as academia are researching and utilizing new fluid substances to further the aim of sustainability. Knowledge of the thermodynamic properties of these substances is a prerequisite, if they are to be utilized to their fullest potential. To date, the way to acquire reliable knowledge of the thermodynamic behavior is through measurements. The ensuing experimental data are then used to develop equations of state, which efficiently embody the gained knowledge of the behavior of the fluid substance, allow for interpolation and, to some extent, extrapolation. However, the acquisition of low-uncertainty experimental data, and thus the development of accurate equations of state, is often time-consuming and expensive. For substances for which suitable force field models exist, molecular modeling and simulation are well-suited to generate thermodynamic data or to augment experimental data, however, at the expense of larger uncertainties. The major goal of this work is to present a new approach for the development of equations of state using (1) symbolic regression, which is a machine learning based model development approach, (2) optimal experimental design, and (3) efficient data acquisition. We demonstrate this approach using the example of density data of an air-like binary mixture (0.2094 O2 + 0.7906 N2) over the temperature range from 100 K to 300 K at pressures of up to 8 MPa, which covers the gaseous, liquid, and supercritical regions. For this purpose, an experimental data set published by von Preetzmann et al. (Int. J. Thermophys. 42, 2021) and molecular simulation data sampled in this work are used. The two data sets are compared in terms of acquisition time, cost, and uncertainty, showing that an optimized combination of experimental and simulation data leads to lower cost while maintaining low uncertainties. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
22. Source detection on graphs
- Author
-
Weber, Tobias, Kaibel, Volker, and Sager, Sebastian
- Published
- 2023
- Full Text
- View/download PDF
23. Optimal experimental design and estimation for q‐space trajectory imaging.
- Author
-
Morez, Jan, Szczepankiewicz, Filip, den Dekker, Arnold J., Vanhevel, Floris, Sijbers, Jan, and Jeurissen, Ben
- Subjects
- *
OPTIMAL designs (Statistics), *DIFFUSION magnetic resonance imaging
- Abstract
Tensor‐valued diffusion encoding facilitates data analysis by q‐space trajectory imaging. By modeling the diffusion signal of heterogeneous tissues with a diffusion tensor distribution (DTD) and modulating the encoding tensor shape, this novel approach allows disentangling variations in diffusivity from microscopic anisotropy, orientation dispersion, and mixtures of multiple isotropic diffusivities. To facilitate the estimation of the DTD parameters, a parsimonious acquisition scheme coupled with an accurate and precise estimation of the DTD is needed. In this work, we create two precision‐optimized acquisition schemes: one that maximizes the precision of the raw DTD parameters, and another that maximizes the precision of the scalar measures derived from the DTD. The improved precision of these schemes compared to a naïve sampling scheme is demonstrated in both simulations and real data. Furthermore, we show that the weighted linear least squares (WLLS) estimator that uses the squared reciprocal of the noisy signal as weights can be biased, whereas the iteratively WLLS estimator with the squared reciprocal of the predicted signal as weights outperforms the conventional unweighted linear LS and nonlinear LS estimators in terms of accuracy and precision. Finally, we show that the use of appropriate constraints can considerably increase the precision of the estimator with only a limited decrease in accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. Large datasets, bias and model‐oriented optimal design of experiments.
- Author
-
Pesce, Elena, Porro, Francesco, and Riccomagno, Eva
- Subjects
- *
EXPERIMENTAL design, *OPTIMAL designs (Statistics)
- Abstract
We review recent literature that proposes to adapt ideas from classical model based optimal design of experiments to problems of data selection of large datasets. Special attention is given to bias reduction and to protection against confounders. Some new results are presented. Theoretical and computational comparisons are made. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. Sensitivity-Driven Experimental Design to Facilitate Control of Dynamical Systems.
- Author
-
Hart, Joseph, van Bloemen Waanders, Bart, Hood, Lisa, and Parish, Julie
- Subjects
- *
OPTIMAL designs (Statistics), *DYNAMICAL systems, *HYPERSONIC planes, *EXPERIMENTAL design, *FLIGHT testing, *SENSITIVITY analysis
- Abstract
Control of nonlinear dynamical systems is a complex and multifaceted process. Essential elements of many engineering systems include high-fidelity physics-based modeling, offline trajectory planning, feedback control design, and data acquisition strategies to reduce uncertainties. This article proposes an optimization-centric perspective which couples these elements in a cohesive framework. We introduce a novel use of hyper-differential sensitivity analysis to understand the sensitivity of feedback controllers to parametric uncertainty in physics-based models used for trajectory planning. These sensitivities provide a foundation to define an optimal experimental design which seeks to acquire data most relevant in reducing demand on the feedback controller. Our proposed framework is illustrated on the Zermelo navigation problem and a hypersonic trajectory control problem using data from NASA's X-43 hypersonic flight tests. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. On Fekete points for a real simplex.
- Author
-
Bos, Len
- Abstract
We survey what is known about Fekete points/optimal designs for a simplex in R^d. Several new results are included. The notion of Fejér exponent for a set of interpolation points is introduced. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
27. AN OFFLINE-ONLINE DECOMPOSITION METHOD FOR EFFICIENT LINEAR BAYESIAN GOAL-ORIENTED OPTIMAL EXPERIMENTAL DESIGN: APPLICATION TO OPTIMAL SENSOR PLACEMENT.
- Author
-
Wu, Keyi, Chen, Peng, and Ghattas, Omar
- Subjects
- *
OPTIMAL designs (Statistics), *SENSOR placement, *GOAL (Psychology), *DECOMPOSITION method, *INVERSE problems, *ONLINE algorithms
- Abstract
Bayesian optimal experimental design (OED) plays an important role in minimizing model uncertainty with limited experimental data in a Bayesian framework. In many applications, rather than minimizing the uncertainty in the inference of model parameters, one seeks to minimize the uncertainty of a model-dependent quantity of interest (QoI). This is known as goal-oriented OED (GOOED). Here, we consider GOOED for linear Bayesian inverse problems governed by large-scale models represented by partial differential equations (PDE) that are computationally expensive to solve. In particular, we consider optimal sensor placement by maximizing an expected information gain (EIG) for the QoI. We develop an efficient method to solve such problems by deriving a new formulation of the goal-oriented EIG. Based on this formulation we propose an offline-online decomposition scheme that achieves significant computational reduction by computing all of the PDE-dependent quantities in an offline stage just once, and optimizing the sensor locations in an online stage without solving any PDEs. Moreover, in the offline stage we need only to compute low-rank approximations of two Hessian-related operators. The computational cost of these low-rank approximations, measured by the number of PDE solves, does not depend on the parameter or data dimensions for a large class of elliptic, parabolic, and sufficiently dissipative hyperbolic inverse problems that exhibit dimension-independent rapid spectral decay. We carry out detailed error analysis for the approximate goal-oriented EIG due to the low-rank approximations of the two operators. Furthermore, in the online stage we extend a swapping greedy method to optimize the sensor locations developed in our recent work that is demonstrated to be more efficient than a standard greedy method. We conduct a numerical experiment for a contaminant transport inverse problem with an infinite-dimensional parameter field to demonstrate the efficiency, accuracy, and both data- and parameter-dimension independence of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
28. Iterative model-based optimal experimental design for mixture-process variable models to predict solubility.
- Author
-
Quilló, Gustavo Lunardon, Bhonsale, Satyajeet, Collas, Alain, Xiouras, Christos, and Van Impe, Jan F.M.
- Subjects
- *
OPTIMAL designs (Statistics), *SOLUBILITY, *DESIGN techniques
- Abstract
Crystallization process design relies heavily on predictive solubility models. However, their calibration is resource- and labour-intensive, especially for multicomponent solvent mixtures at different process temperatures. Additionally, solubility data collection often occurs in a constrained design space due to e.g., polymorphism and solvent miscibility limitations. Optimal experimental design techniques enable the efficient use of resources by specifying a (minimum) number of maximally informative experiments focused on improving a statistical criterion for a given model structure in a constrained design space. This work generates D-, A- and I-optimal experimental designs for the commonly applied Van't Hoff-Jouyban-Acree (VH-JA) solubility regression model, in which it is demonstrated that I-optimal designs reduce the experimental burden for model calibration by approximately 25 % as compared to a typical screening dataset. Alternatively, existing datasets can be augmented to significantly improve model prediction power. The suggested workflow is applied to two case studies: itraconazole in tetrahydrofuran-water and mesalazine in ethanol-polyethylene glycol-water. The screening datasets of 72 and 212 runs were augmented with 16 additional experiments, resulting in a 33 % and 67 % reduction in the corresponding model prediction variance, respectively, which translates to improved model reliability at unprobed conditions.
• I-optimal designs carry potential to reduce number of experiments by roughly 25 %.
• I-optimal designs offered 31 % lower variance of prediction than D-/A- optimal designs.
• Solubility experimentation efforts should be divided in screening and optimal designs.
• Optimal augmentation of an existing solubility dataset is straightforward. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
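Entry 28's idea of augmenting an existing screening dataset with a handful of optimally chosen runs can be illustrated generically: given the model matrix of runs already performed, pick the extra candidate runs that most reduce the average prediction variance over the design region (the I-criterion). The sketch below does this greedily for a quadratic stand-in model in one solvent fraction and a scaled temperature; the model form, candidate grid, and greedy (rather than exchange-type) search are assumptions, not the VH-JA workflow of the paper.

```python
import numpy as np

def model_row(frac, temp):
    """Quadratic stand-in for a solubility regression model in
    (solvent fraction, scaled temperature)."""
    return np.array([1.0, frac, temp, frac * temp, frac**2, temp**2])

def i_criterion(X, region):
    """Average prediction variance f(x)'(X'X)^{-1}f(x) over the region grid."""
    M_inv = np.linalg.inv(X.T @ X + 1e-9 * np.eye(X.shape[1]))
    return np.mean([row @ M_inv @ row for row in region])

# Existing screening runs (assumed): a coarse 3 x 3 grid.
existing = np.array([model_row(f, t) for f in (0.0, 0.5, 1.0) for t in (-1, 0, 1)])

# Candidate augmentation runs and the prediction region of interest.
cand = [(f, t) for f in np.linspace(0, 1, 11) for t in np.linspace(-1, 1, 11)]
region = np.array([model_row(f, t) for f, t in cand])

X, added = existing.copy(), []
for _ in range(4):                               # add 4 extra runs greedily
    scores = [i_criterion(np.vstack([X, model_row(f, t)]), region) for f, t in cand]
    f, t = cand[int(np.argmin(scores))]
    added.append((f, t))
    X = np.vstack([X, model_row(f, t)])

print("I-optimal augmentation runs (fraction, temp):", added)
print("average prediction variance before/after:",
      round(i_criterion(existing, region), 3), "->", round(i_criterion(X, region), 3))
```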
29. Use a sequential gradient-enhanced-Kriging optimal experimental design method to build high-precision approximate model for complex simulation problem.
- Author
-
Li, Yaohui, Shi, Junjun, and Shen, Jingfang
- Abstract
The surrogate model based on Kriging has been widely used to approximate computationally expensive simulation problems. Although the accuracy of gradient-enhanced Kriging (GEK) is often higher than that of ordinary Kriging, designers cannot avoid the additional time consumed by the gradient calculations in GEK. To this end, a sequential gradient-enhanced-Kriging optimal experimental design method with the Gaussian correlation function (GCF) is investigated to approximate complex black-box simulation problems by introducing gradient information of the Kriging parameters. Because the GCF is differentiable, the gradient information can be evaluated simply. This characteristic makes the proposed method effectively improve the modeling accuracy and efficiency of GEK. As expected, the test results from benchmark functions and the cycloid gear pump simulation show the feasibility, stability and applicability of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
30. On the Information Obtainable from Comparative Judgments.
- Author
-
Bürkner, Paul-Christian
- Subjects
LEGAL judgments ,PERSONALITY tests ,RICE diseases & pests ,OPTIMAL designs (Statistics) ,TEST design - Abstract
Personality tests employing comparative judgments have been proposed as an alternative to Likert-type rating scales. One of the main advantages of a comparative format is that it can reduce faking of responses in high-stakes situations. However, previous research has shown that it is highly difficult to obtain trait score estimates that are both faking resistant and sufficiently accurate for individual-level diagnostic decisions. With the goal of contributing to a solution, I study the information obtainable from comparative judgments analyzed by means of Thurstonian IRT models. First, I extend the mathematical theory of ordinal comparative judgments and corresponding models. Second, I provide optimal test designs for Thurstonian IRT models that maximize the accuracy of people's trait score estimates from both frequentist and Bayesian statistical perspectives. Third, I derive analytic upper bounds for the accuracy of these trait estimates achievable through ordinal Thurstonian IRT models. Fourth, I perform numerical experiments that complement results obtained in earlier simulation studies. The combined analytical and numerical results suggest that it is indeed possible to design personality tests using comparative judgments that yield trait scores estimates sufficiently accurate for individual-level diagnostic decisions, while reducing faking in high-stakes situations. Recommendations for the practical application of comparative judgments for the measurement of personality, specifically in high-stakes situations, are given. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
31. To shift or to rotate? Comparison of acquisition strategies for multi-slice super-resolution magnetic resonance imaging.
- Author
-
Nicastro, Michele, Jeurissen, Ben, Beirinckx, Quinten, Smekens, Céline, Poot, Dirk H. J., Sijbers, Jan, and den Dekker, Arnold J.
- Subjects
MAGNETIC resonance imaging, MONTE Carlo method, OPTIMAL designs (Statistics), SIGNAL-to-noise ratio
- Abstract
Multi-slice (MS) super-resolution reconstruction (SRR) methods have been proposed to improve the trade-off between resolution, signal-to-noise ratio and scan time in magnetic resonance imaging. MS-SRR consists in the estimation of an isotropic high-resolution image from a series of anisotropic MS images with a low through-plane resolution, where the anisotropic low-resolution images can be acquired according to different acquisition schemes. However, it is yet unclear how these schemes compare in terms of statistical performance criteria, especially for regularized MS-SRR. In this work, the estimation performance of two commonly adopted MS-SRR acquisition schemes based on shifted and rotated MS images respectively are evaluated in a Bayesian framework. The maximum a posteriori estimator, which introduces regularization by incorporating prior knowledge in a statistically well-defined way, is put forward as the estimator of choice and its accuracy, precision, and Bayesian mean squared error (BMSE) are used as performance criteria. Analytic calculations as well as Monte Carlo simulation experiments show that the rotated scheme outperforms the shifted scheme in terms of precision, accuracy, and BMSE. Furthermore, the superior performance of the rotated scheme is confirmed in real data experiments and in retrospective simulation experiments with and without inter-image motion. Results show that the rotated scheme allows regularized MS-SRR with a higher accuracy and precision than the shifted scheme, besides being more resilient to motion. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
32. A Genetic Algorithm-Enhanced Sensor Marks Selection Algorithm for Wavefront Aberration Modeling in Extreme-UV (EUV) Photolithography
- Author
-
Aris Magklaras, Panayiotis Alefragis, Christos Gogos, Christos Valouxis, and Alexios Birbas
- Subjects
photolithography, optimal design of experiments, optimal experimental design, G-optimal, D-optimal, A-optimal, Information technology, T58.5-58.64
- Abstract
In photolithographic processes, nanometer-level-precision wavefront-aberration models enable the machine to meet the overlay (OVL) drift and critical dimension (CD) specifications. Software control algorithms take these models as input and correct any expected wavefront imperfections before they reach the wafer. In this way, a near-optimal image is exposed on the wafer surface. Optimizing the parameters of these models, however, involves several time-costly sensor measurements which reduce the throughput performance of the machine in terms of exposed wafers per hour. Photolithography machines therefore face a trade-off between throughput and quality. One of the most common optimal experimental design (OED) problems in photolithography machines (and beyond) is how to choose the minimum number of sensor measurements that will provide the maximum amount of information. Additionally, each sensor measurement corresponds to a point on the wafer surface and therefore we must measure uniformly across the wafer surface as well. In order to solve this problem, we propose a sensor mark selection algorithm which exploits genetic algorithms. The proposed solution first selects a pool of points that qualify as candidates to be selected in order to meet the uniformity constraint. Then, the point that provides the maximum amount of information, quantified by the Fisher-based criteria of G-, D-, and A-optimality, is selected and added to the measurement scheme. This process, however, is considered “greedy”, and for this reason, genetic algorithms (GA) are exploited to further improve the solution. By repeating the “greedy” part several times in parallel, we obtain an initial population that serves as the input to our GA. This meta-heuristic approach significantly outperforms the “greedy” approach. The proposed solution is applied in a real life semiconductors industry use case and achieves interesting industrial as well as academic results.
- Published
- 2023
- Full Text
- View/download PDF
33. Parameter Individual Optimal Experimental Design and Calibration of Parametric Models
- Author
-
Nicolai Palm, Florian Stroebl, and Herbert Palm
- Subjects
Parametric models, parameter estimation, design of experiments, optimal experimental design, battery aging, computer experiment, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Parametric models reflect system behavior in general and characterize individual system instances by specific parameter values. For a variety of scientific disciplines, model calibration by parameter quantification is therefore of central importance. As the time and cost of calibration experiments increase, the question of how to determine parameter values of the required quality with a minimum number of experiments comes to the fore. In this paper, a methodology is introduced that allows quantifying and optimizing the achievable parameter extraction quality of an experimental plan, including a process and methods for adapting the experimental plan for improved estimation of individually selectable parameters. The resulting parameter-individual optimal design of experiments (pi-OED) enables experimenters to extract a maximum of parameter-specific information from a given number of experiments. We demonstrate how to minimize the variance or covariances of individually selectable parameter estimators by model-based calculation of the experimental designs. Using the Fisher Information Matrix in combination with the Cramér-Rao inequality, the pi-OED plan is reduced to a global optimization problem. The pi-OED workflow is demonstrated using computer experiments to calibrate a model describing calendrical aging of lithium-ion battery cells. Applying bootstrapping methods also allows quantifying parameter estimation distributions for further benchmarking. Comparing pi-OED-based computer experimental results with those based on state-of-the-art designs of experiments reveals its efficiency improvement. All computer experimental results are obtained in Python and may be reproduced using a provided Jupyter Notebook along with the source code. Both are available under https://github.com/nicolaipalm/oed.
- Published
- 2022
- Full Text
- View/download PDF
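The parameter-individual criterion described in entry 33 amounts to choosing the experimental plan that minimizes the Cramér-Rao bound on the variance of one selected parameter, that is, the corresponding diagonal element of the inverse Fisher information matrix. The sketch below applies that criterion to a simple two-parameter exponential-decay model as a crude stand-in for a calendar-aging curve; the model, noise assumption, and candidate measurement times are illustrative, and the paper's bootstrapping-based validation is not shown.

```python
import numpy as np
from itertools import combinations

def jacobian_row(t, theta):
    """d/dtheta of the model y(t) = c0 * exp(-k * t) at measurement time t."""
    c0, k = theta
    return np.array([np.exp(-k * t), -c0 * t * np.exp(-k * t)])

def crlb_for_parameter(times, theta, sigma2, index):
    """Cramér-Rao lower bound on the variance of parameter `index`
    (diagonal element of the inverse Fisher information matrix)."""
    J = np.array([jacobian_row(t, theta) for t in times])
    fim = J.T @ J / sigma2
    return np.linalg.inv(fim)[index, index]

theta_guess, sigma2 = (1.0, 0.3), 0.01     # nominal parameters, noise variance
candidate_times = np.linspace(0.5, 10.0, 20)

# Pick 4 measurement times minimizing the CRLB of the decay rate k (index 1).
best = min(combinations(candidate_times, 4),
           key=lambda ts: crlb_for_parameter(ts, theta_guess, sigma2, index=1))
print("pi-optimal measurement times for k:", np.round(best, 2))
print("CRLB(k) at those times:", crlb_for_parameter(best, theta_guess, sigma2, 1))
```

Switching `index` from 1 to 0 targets the amplitude instead of the decay rate and generally yields a different set of times, which is the point of making the criterion parameter-individual.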
34. Quality of Prediction for Spatiotemporal Covariance Models
- Author
-
Waldl, Helmut, Förstner, Ulrich, Series Editor, Rulkens, Wim H., Series Editor, Salomons, Wim, Series Editor, Ksibi, Mohamed, editor, Ghorbal, Achraf, editor, Chakraborty, Sudip, editor, Chaminé, Helder I., editor, Barbieri, Maurizio, editor, Guerriero, Giulia, editor, Hentati, Olfa, editor, Negm, Abdelazim, editor, Lehmann, Anthony, editor, Römbke, Jörg, editor, Costa Duarte, Armando, editor, Xoplaki, Elena, editor, Khélifi, Nabil, editor, Colinet, Gilles, editor, Miguel Dias, João, editor, Gargouri, Imed, editor, Van Hullebusch, Eric D., editor, Sánchez Cabrero, Benigno, editor, Ferlisi, Settimio, editor, Tizaoui, Chedly, editor, Kallel, Amjad, editor, Rtimi, Sami, editor, Panda, Sandeep, editor, Michaud, Philippe, editor, Sahu, Jaya Narayana, editor, Seffen, Mongi, editor, and Naddeo, Vincenzo, editor
- Published
- 2021
- Full Text
- View/download PDF
35. Optimal Experimental Design Methods for Acquiring and Restricting Information to Improve Decision Making
- Author
-
Walsh, Sarah E., Sealy, William, Feigh, Karen M., Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Ayaz, Hasan, editor, and Asgher, Umer, editor
- Published
- 2021
- Full Text
- View/download PDF
36. Optimisation of geotechnical surveys using a BIM-based geostatistical analysis
- Author
-
Mahmoudi, Elham, Stepien, Marcel, and König, Markus
- Published
- 2021
- Full Text
- View/download PDF
37. To shift or to rotate? Comparison of acquisition strategies for multi-slice super-resolution magnetic resonance imaging
- Author
-
Michele Nicastro, Ben Jeurissen, Quinten Beirinckx, Céline Smekens, Dirk H. J. Poot, Jan Sijbers, and Arnold J. den Dekker
- Subjects
magnetic resonance imaging, super-resolution, optimal experimental design, Bayesian estimation, image reconstruction, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
Multi-slice (MS) super-resolution reconstruction (SRR) methods have been proposed to improve the trade-off between resolution, signal-to-noise ratio and scan time in magnetic resonance imaging. MS-SRR consists in the estimation of an isotropic high-resolution image from a series of anisotropic MS images with a low through-plane resolution, where the anisotropic low-resolution images can be acquired according to different acquisition schemes. However, it is yet unclear how these schemes compare in terms of statistical performance criteria, especially for regularized MS-SRR. In this work, the estimation performance of two commonly adopted MS-SRR acquisition schemes based on shifted and rotated MS images respectively are evaluated in a Bayesian framework. The maximum a posteriori estimator, which introduces regularization by incorporating prior knowledge in a statistically well-defined way, is put forward as the estimator of choice and its accuracy, precision, and Bayesian mean squared error (BMSE) are used as performance criteria. Analytic calculations as well as Monte Carlo simulation experiments show that the rotated scheme outperforms the shifted scheme in terms of precision, accuracy, and BMSE. Furthermore, the superior performance of the rotated scheme is confirmed in real data experiments and in retrospective simulation experiments with and without inter-image motion. Results show that the rotated scheme allows regularized MS-SRR with a higher accuracy and precision than the shifted scheme, besides being more resilient to motion.
- Published
- 2022
- Full Text
- View/download PDF
38. Optimal criteria and their asymptotic form for data selection in data-driven reduced-order modelling with Gaussian process regression.
- Author
-
Sapsis, Themistoklis P. and Blanchard, Antoine
- Subjects
- *
KRIGING, *REDUCED-order models, *GAUSSIAN processes, *OPTIMAL designs (Statistics), *PROBABILITY density function, *SUPERVISED learning
- Abstract
We derive criteria for the selection of datapoints used for data-driven reduced-order modelling and other areas of supervised learning based on Gaussian process regression (GPR). While this is a well-studied area in the fields of active learning and optimal experimental design, most criteria in the literature are empirical. Here we introduce an optimality condition for the selection of a new input defined as the minimizer of the distance between the approximated output probability density function (pdf) of the reduced-order model and the exact one. Given that the exact pdf is unknown, we define the selection criterion as the supremum over the unit sphere of the native Hilbert space for the GPR. The resulting selection criterion, however, has a form that is difficult to compute. We combine results from GPR theory and asymptotic analysis to derive a computable form of the defined optimality criterion that is valid in the limit of small predictive variance. The derived asymptotic form of the selection criterion leads to convergence of the GPR model that guarantees a balanced distribution of data resources between probable and large-deviation outputs, resulting in an effective way of sampling towards data-driven reduced-order modelling. This article is part of the theme issue 'Data-driven prediction in dynamical systems'. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
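As context for the abstract above, a generic GPR active-sampling loop is sketched below. The acquisition used here (largest predictive variance) is a common baseline, not the output-pdf-distance criterion or its asymptotic form derived in the paper; it only marks where such a criterion would plug in. The toy expensive_model function and all settings are assumptions made for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(x):                            # stand-in for the full-order model
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(4, 1))                # initial design
y = expensive_model(X).ravel()
candidates = np.linspace(-2, 2, 200).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]            # baseline acquisition; swap in the
    X = np.vstack([X, x_next])                     # paper's criterion here if available
    y = np.append(y, expensive_model(x_next)[0])
```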
39. Optimizing Thermoacoustic Characterization Experiments for Identifiability Improves Both Parameter Estimation Accuracy and Closed-Loop Controller Robustness Guarantees.
- Author
-
Chen, Xiaoling, O'Connor, Jacqueline, and Fathy, Hosam
- Subjects
PARAMETER estimation ,HEAT release rates ,OPTIMAL designs (Statistics) ,ACOUSTIC excitation ,ROBUST control ,ADAPTIVE fuzzy control - Abstract
This article examines the degree to which optimizing a Rijke tube experiment can improve the accuracy of thermoacoustic model parameter estimation, thereby facilitating robust stability control. We use a one-dimensional thermoacoustic model to describe the combustion dynamics in a Rijke tube. This model contains two unknown parameters that relate velocity perturbations to heat release rate oscillations, namely, a time delay τ and amplification factor β. The parameters are estimated from experiments where the system input is the acoustic excitation from a loudspeaker and the output is the pressure response captured by a microphone. Our work is grounded in the insight that optimizing an experiment's design for higher Fisher identifiability leads to more accurate parameter estimates. The novel goal of this paper is to apply this insight in the laboratory using a flame-driven Rijke tube setup. For comparison purposes, we conduct a benchmark experiment with a broadband chirp signal as the excitation input. Next, we excite the Rijke tube at two frequencies optimized for Fisher identifiability. Repeats of both experiments show that the optimal experiment achieves parameter estimates with uncertainties at least one order of magnitude smaller than the benchmark. With smaller parameter estimate uncertainties, an LQG controller designed to attenuate combustion instabilities is able to achieve stronger robustness guarantees, quantified in terms of closed-loop structured singular values that account for parameter estimation uncertainty. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
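The idea of optimizing an excitation for Fisher identifiability can be sketched as follows: build the Fisher information matrix of a two-parameter response model from numerical sensitivities, then search for the pair of excitation frequencies that maximizes its determinant. The response function g below is a placeholder, not the one-dimensional thermoacoustic model used in the paper, and all numerical values are assumptions for illustration.

```python
import numpy as np
from itertools import combinations

def g(freq, tau, beta):
    # Placeholder frequency response depending on time delay tau and gain beta.
    return beta * np.cos(2 * np.pi * freq * tau) / (1 + freq)

def fim(freqs, tau, beta, sigma=0.01, h=1e-6):
    """Fisher information matrix for Gaussian measurement noise of std sigma."""
    J = np.empty((len(freqs), 2))
    for i, f in enumerate(freqs):
        J[i, 0] = (g(f, tau + h, beta) - g(f, tau - h, beta)) / (2 * h)
        J[i, 1] = (g(f, tau, beta + h) - g(f, tau, beta - h)) / (2 * h)
    return J.T @ J / sigma**2

grid = np.linspace(50, 500, 46)                    # candidate frequencies (Hz)
tau0, beta0 = 2e-3, 0.8                            # nominal parameter values
best = max(combinations(grid, 2),
           key=lambda p: np.linalg.det(fim(p, tau0, beta0)))
print("frequencies maximizing det(FIM):", best)
```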
40. Improved Particle Swarm Optimization Algorithms for Optimal Designs with Various Decision Criteria.
- Author
-
Li, Chang and Coster, Daniel C.
- Subjects
- *
PARTICLE swarm optimization , *MATHEMATICAL optimization , *OPTIMAL designs (Statistics) , *GENETIC algorithms , *DECISION making - Abstract
Particle swarm optimization (PSO) is an attractive, easily implemented method that has been used successfully across a wide range of applications. In this paper, an improved particle swarm optimization algorithm is proposed that utilizes the core ideas of genetic algorithms together with dynamic parameters. Then, building on the improved algorithm and combining PSO with decision making, nested PSO algorithms with two useful decision-making criteria (the optimistic coefficient criterion and the minimax regret criterion) are proposed. The improved PSO algorithm is tested on two unimodal functions and two multimodal functions, and the results are much better than those of the traditional PSO algorithm. The nested algorithms are applied to the Michaelis–Menten model and the two-parameter logistic regression model as examples. For the Michaelis–Menten model, the particles converge to the best solution after 50 iterations. For the two-parameter logistic regression model, the optimality of the algorithms is verified by the equivalence theorem. More results from applying our algorithms to other models are available upon request. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
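For reference, a minimal textbook PSO with a global-best topology is sketched below; it is the baseline that the paper's improved and nested algorithms build on, not a reproduction of them. The Rastrigin function stands in for the multimodal benchmarks mentioned in the abstract, and all coefficients are conventional default choices.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO; a baseline sketch, not the paper's algorithms."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

# Example: the Rastrigin function, a standard multimodal test problem.
rastrigin = lambda z: 10 * len(z) + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
print(pso(rastrigin, dim=2))
```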
41. OPTIMAL EXPERIMENTAL DESIGN FOR INVERSE PROBLEMS IN THE PRESENCE OF OBSERVATION CORRELATIONS.
- Author
-
ATTIA, AHMED and CONSTANTINESCU, EMIL
- Subjects
- *
OPTIMAL designs (Statistics) , *INVERSE problems , *NUMERICAL weather forecasting , *HADAMARD matrices , *SENSOR placement - Abstract
Optimal experimental design (OED) is the general formalism of sensor placement and decisions about the data collection strategy for engineered or natural experiments. This approach is prevalent in many critical fields such as battery design, numerical weather prediction, geosciences, and environmental and urban studies. State-of-the-art computational methods for experimental design, however, do not accommodate correlation structure in observational errors produced by many expensive-to-operate devices such as X-ray machines or radar and satellite retrievals. Discarding evident data correlations leads to biased results, poor data collection decisions, and waste of valuable resources. We present a general formulation of the OED formalism for model-constrained large-scale Bayesian linear inverse problems, where measurement errors are generally correlated. The proposed approach utilizes the Hadamard product of matrices to formulate the weighted likelihood and is valid for both finite- and infinite-dimensional Bayesian inverse problems. Extensive numerical experiments are carried out for empirical verification of the proposed approach by using an advection-diffusion model, where the objective is to optimally place a small set of sensors, under a limited budget, to predict the concentration of a contaminant in a closed and bounded domain. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
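To illustrate the role of observation correlations in Bayesian OED for a linear Gaussian inverse problem: the posterior covariance is available in closed form, and an A-optimal design minimizes its trace. The brute-force sketch below scores candidate sensor subsets under a correlated error covariance R; it does not reproduce the paper's Hadamard-product formulation or its large-scale machinery, and every matrix in it is a synthetic stand-in.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n_param, n_candidates, budget = 5, 12, 3
H = rng.standard_normal((n_candidates, n_param))   # candidate observation operator
prior_cov = np.eye(n_param)
d = np.abs(np.subtract.outer(np.arange(n_candidates), np.arange(n_candidates)))
R = 0.1 * 0.8 ** d                                  # correlated noise covariance

def a_criterion(idx):
    """A-optimality: trace of the Gaussian posterior covariance for subset idx."""
    Hs, Rs = H[list(idx)], R[np.ix_(idx, idx)]
    post_prec = Hs.T @ np.linalg.solve(Rs, Hs) + np.linalg.inv(prior_cov)
    return np.trace(np.linalg.inv(post_prec))

best = min(combinations(range(n_candidates), budget), key=a_criterion)
print("A-optimal sensor subset:", best)
```

Dropping the off-diagonal entries of R in this sketch generally changes which subset wins, which is the kind of bias the paper warns about when correlations are discarded.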
42. Optimal Exposure Time in Gamma-Ray Attenuation Experiments for Monitoring Time-Dependent Densities.
- Author
-
Gonzalez-Nicolas, Ana, Bilgic, Deborah, Kröker, Ilja, Mayar, Assem, Trevisan, Luca, Steeb, Holger, Wieprecht, Silke, and Nowak, Wolfgang
- Subjects
PHOTON counting ,OPTIMAL designs (Statistics) ,BEER-Lambert law ,PHOTON detectors ,GAMMA rays ,MULTIPHASE flow - Abstract
Several environmental phenomena require monitoring time-dependent densities in porous media, e.g., clogging of river sediments, mineral dissolution/precipitation, or variably-saturated multiphase flow. Gamma-ray attenuation (GRA) can monitor time-dependent densities without being destructive or invasive under laboratory conditions. GRA sends gamma rays through a material, where they are attenuated by photoelectric absorption and then recorded by a photon detector. The attenuated intensity of the emerging beam relates to the density of the traversed material via Beer–Lambert's law. An important parameter for designing time-variable GRA is the exposure time, the time the detector takes to gather and count photons before converting the recorded intensity to a density. Large exposure times capture the time evolution poorly (temporal raster error, inaccurate temporal discretization), while small exposure times yield imprecise intensity values (noise-related error, i.e., a small signal-to-noise ratio). Together, these two errors make up the total error of observing time-dependent densities by GRA. Our goal is to provide an optimization framework for time-dependent GRA experiments with respect to exposure time and other key parameters, thus facilitating cleaner experimental data for improved process understanding. Experimentalists set, or iterate over, several experimental input parameters (e.g., Beer–Lambert parameters) and expectations on the yet unknown dynamics (e.g., mean and amplitude of density and characteristic time of density changes). We model the yet unknown dynamics as a random Gaussian process to derive expressions for expected errors prior to the experiment as a function of key experimental parameters. Based on this, we provide an optimization framework that allows finding the optimal (minimal-total-error) setup and demonstrate its application on synthetic experiments. Article Highlights: (1) We study the influence of anticipated density changes and experimental setup on optimal designs for GRA measurements. (2) We present a methodology that finds the optimal setup (minimum error) as a function of the exposure time and other parameters. (3) We provide experimentalists with a quantitative understanding of unavoidable inaccuracies and how to minimize them. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
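A schematic of the exposure-time trade-off described above: counting noise shrinks with longer exposure while the temporal raster error grows with it, so the total error has an interior minimum. The two error terms and all parameter values below are crude stand-ins, not the Gaussian-process-based expressions derived in the paper; the Beer–Lambert retrieval at the end shows how attenuated counts map back to density.

```python
import numpy as np

mu, L = 0.02, 10.0           # attenuation coefficient (cm^2/g), path length (cm)
I0_rate = 5e4                # source count rate (photons per second)
amp, t_char = 0.5, 60.0      # density amplitude (g/cm^3), characteristic time (s)

def total_error(dt):
    counts = I0_rate * dt
    noise_err = 1.0 / (mu * L * np.sqrt(counts))   # density error from counting noise
    raster_err = amp * dt / t_char                 # density change missed within dt
    return noise_err + raster_err

dts = np.logspace(-1, 2, 400)                      # candidate exposure times (s)
dt_opt = dts[np.argmin(total_error(dts))]
print(f"schematic optimal exposure time: {dt_opt:.2f} s")

def density_from_counts(I, I0, mu=mu, L=L):
    """Beer–Lambert retrieval: density of the traversed material from counts."""
    return -np.log(I / I0) / (mu * L)

print(density_from_counts(I=4.2e4, I0=5e4))
```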
43. Active Search Methods to Predict Material Failure Under Intermittent Loading in the Serebrinsky-Ortiz Fatigue Model
- Author
-
Guth, Stephen, Sapsis, Themistoklis, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Darema, Frederica, editor, Blasch, Erik, editor, Ravela, Sai, editor, and Aved, Alex, editor
- Published
- 2020
- Full Text
- View/download PDF
44. Effect of Objective Function on Data-Driven Greedy Sparse Sensor Optimization
- Author
-
Kumi Nakai, Keigo Yamada, Takayuki Nagata, Yuji Saito, and Taku Nonomura
- Subjects
Data-driven ,sparse sensor optimization ,greedy method ,optimal experimental design ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
The problem of selecting an optimal set of sensors for estimating high-dimensional data is considered. Objective functions based on the D-, A-, and E-optimality criteria of optimal design are adopted in greedy methods that maximize the determinant, minimize the trace of the inverse, and maximize the minimum eigenvalue of the Fisher information matrix, respectively. First, the Fisher information matrix is derived depending on the numbers of latent state variables and sensors. Then, a unified formulation of the objective function based on A-optimality is introduced and proved to be submodular, which provides a lower bound on the performance of the greedy method. Next, greedy methods based on D-, A-, and E-optimality are applied to randomly generated systems and a practical dataset concerning the global climate; these correspond to an almost ideal and a practical case in terms of statistics, respectively. The D- and A-optimality-based greedy methods select better sensors. The E-optimality-based greedy method does not select better sensors in terms of the index of E-optimality in the oversampled case, whereas the A-optimality-based greedy method unexpectedly does so in terms of the index of E-optimality. The poor performance of the E-optimality-based greedy method is due to the lack of submodularity of the E-optimality index, and the better performance of the A-optimality-based greedy method is due to the relation between A- and E-optimality. The indices of D- and A-optimality appear to be important in the ideal case where the statistics of the system are well known, and therefore the D- and A-optimality-based greedy methods are suitable for accurate reconstruction. On the other hand, the index of E-optimality appears to be critical in the more practical case where the statistics of the system are not well known, and therefore the A-optimality-based greedy method performs best because of its superiority in terms of the index of E-optimality.
- Published
- 2021
- Full Text
- View/download PDF
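The greedy D-optimality strategy discussed above can be sketched compactly: given a sensing matrix U whose rows correspond to candidate sensor locations and whose columns span the latent modes, add at each step the row that most increases the (log-)determinant of the Fisher information matrix. The sketch below assumes unit observation noise and uses a random data matrix in place of a real modal basis; it illustrates the criterion, not the authors' implementation.

```python
import numpy as np

def greedy_d_optimal(U, p):
    """Greedy D-optimality sensor selection for the linear model y = U[s] z.

    At each step, add the row of U (sensor) that maximizes the determinant of
    the Fisher information matrix U_s^T U_s (unit noise assumed). A small ridge
    keeps the determinant defined while fewer sensors than modes are chosen.
    """
    n, r = U.shape
    chosen = []
    for _ in range(p):
        best_gain, best_i = -np.inf, None
        for i in set(range(n)) - set(chosen):
            Us = U[chosen + [i]]
            _, logdet = np.linalg.slogdet(Us.T @ Us + 1e-10 * np.eye(r))
            if logdet > best_gain:
                best_gain, best_i = logdet, i
        chosen.append(best_i)
    return chosen

# Toy usage: modes from a random data matrix (stand-in for POD/SVD modes).
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 50))               # 100 candidate locations, 50 snapshots
U, _, _ = np.linalg.svd(X, full_matrices=False)
print(greedy_d_optimal(U[:, :5], p=8))           # pick 8 sensors for 5 latent modes
```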
45. Variational Methods for Optimal Experimental Design
- Author
-
Kennamer, Noble William
- Subjects
Artificial intelligence ,Statistics ,Active Learning ,Generative Modeling ,Machine Learning ,Optimal Experimental Design ,Variational Methods - Abstract
In this work we study variational methods for Bayesian optimal experimental design (BOED). Experimentation is a cornerstone of science and is central to any major engineering effort. Often experiments require the use of substantial resources, from expensive equipment to limited researcher time; in addition, experiments can be dangerous or may be required to be completed in a given period of time. For these reasons, we prefer to conduct our experiments as efficiently as possible, acquiring as much information as we can given the resources available to us. Optimal experimental design (OED) is a sub-field of statistics focused on developing methods for accomplishing this goal. The OED problem is formulated by defining a utility function over designs and optimizing this function over the set of all feasible designs. We focus on the Expected Information Gain (EIG), a widely used utility function with sound theoretical support. However, in practice the EIG is intractable to compute, and approximation strategies are required. We investigate the use of variational methods for this purpose and show substantial improvement over competing approximation techniques. A specific form of OED common in the field of machine learning (ML) is active learning (AL). In the active learning framework, we would like to obtain a labeled dataset in order to train a supervised model. However, for all the reasons stated, labeling data points can be costly and again we should make efficient use of our labeling resources. We present a novel application of active learning to optimize spectroscopic follow-up for large-scale astronomical surveys. Finally, much of this work requires learning functions over sets which we know must satisfy certain properties (e.g., permutation invariance). We conclude the thesis by presenting a novel neural network architecture for predicting the astronomical class of individual objects in the same exposure, specifically designed to accommodate known inductive biases present in the data.
- Published
- 2022
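The Expected Information Gain mentioned above is intractable in general; the nested Monte Carlo estimator below is the standard baseline that variational approximations aim to improve. The linear Gaussian model used here is purely illustrative, chosen because its EIG grows predictably with the design magnitude.

```python
import numpy as np

def eig_nmc(design, n_outer=500, n_inner=500, seed=0):
    """Nested Monte Carlo estimate of the Expected Information Gain (EIG).

    Illustrative model only: theta ~ N(0, 1), y | theta, d ~ N(d * theta, 1).
    Variational estimators replace the inner marginal-likelihood average
    with a learned bound.
    """
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(n_outer)
    y = design * theta + rng.standard_normal(n_outer)

    log_lik = -0.5 * (y - design * theta) ** 2           # log p(y | theta, d) + const
    theta_in = rng.standard_normal((n_inner, 1))
    log_marg = -0.5 * (y - design * theta_in) ** 2        # inner samples for p(y | d)
    log_evidence = np.log(np.mean(np.exp(log_marg), axis=0))
    return np.mean(log_lik - log_evidence)                # additive constants cancel

# The EIG grows with |d| here, since larger designs make y more informative.
for d in (0.1, 1.0, 3.0):
    print(d, round(eig_nmc(d), 3))
```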
46. A criterion and incremental design construction for simultaneous kriging predictions.
- Author
-
Waldl, Helmut, Müller, Werner G., and Trandafir, Paula Camelia
- Abstract
In this paper, we further investigate the problem of selecting a set of design points for universal kriging, which is a widely used technique for spatial data analysis. Our goal is to select the design points in order to make simultaneous predictions of the random variable of interest at a finite number of unsampled locations with maximum precision. Specifically, we consider as response a correlated random field given by a linear model with an unknown parameter vector and a spatial error correlation structure. We propose a new design criterion that aims at simultaneously minimizing the variation of the prediction errors at various points. We also present various efficient techniques for incrementally building designs for that criterion scaling well for high dimensions. Thus the method is particularly suitable for big data applications in areas of spatial data analysis such as mining, hydrogeology, natural resource monitoring, and environmental sciences or equivalently for any computer simulation experiments. We have demonstrated the effectiveness of the proposed designs through two illustrative examples: one by simulation and another based on real data from Upper Austria. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
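The incremental construction described above can be illustrated with simple kriging under a known covariance: at each step, add the candidate location that most reduces the maximum prediction variance over the unsampled target sites. The exponential covariance, grids, and starting point below are arbitrary assumptions, and this criterion is a simplification of the simultaneous-prediction criterion for universal kriging (unknown trend parameters) proposed in the paper.

```python
import numpy as np

def cov(a, b, scale=0.3):
    """Exponential covariance between 1-D location sets a and b."""
    return np.exp(-np.abs(a[:, None] - b[None, :]) / scale)

def max_pred_variance(design, targets):
    """Maximum simple-kriging prediction variance over the target locations."""
    K = cov(design, design) + 1e-9 * np.eye(len(design))
    k = cov(design, targets)
    var = 1.0 - np.einsum('ij,ij->j', k, np.linalg.solve(K, k))
    return var.max()

candidates = np.linspace(0, 1, 101)
targets = np.linspace(0, 1, 25)            # unsampled prediction locations
design = np.array([0.5])                   # start from a single central point
for _ in range(5):
    scores = [max_pred_variance(np.append(design, c), targets) for c in candidates]
    design = np.append(design, candidates[int(np.argmin(scores))])
print("incremental design:", np.round(np.sort(design), 3))
```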
47. Examination of optimized protocols for pCASL: Sensitivity to macrovascular contamination, flow dispersion, and prolonged arterial transit time.
- Author
-
Zhang, Logan X., Woods, Joseph G., Okell, Thomas W., and Chappell, Michael A.
- Subjects
CEREBRAL circulation ,SPIN labels ,BLOOD volume ,DISPERSION (Chemistry) - Abstract
Purpose: Previously, multi-post-labeling-delay (PLD) pseudo-continuous arterial spin labeling (pCASL) protocols have been optimized for the estimation accuracy of the cerebral blood flow (CBF) with/without the arterial transit time (ATT) under a standard kinetic model and a normal ATT range. This study aims to examine the estimation errors of these protocols under the effects of macrovascular contamination, flow dispersion, and prolonged arrival times, all of which might differ substantially in elderly or pathological groups. Methods: Simulated data for four protocols with varying degrees of arterial blood volume (aBV), flow dispersion, and ATTs were fitted with different kinetic models, both with and without explicit correction for macrovascular signal contamination (MVC), to obtain CBF and ATT estimates. Sensitivity to MVC was defined and calculated when aBV > 0.5%. A previously acquired dataset was retrospectively analyzed to compare with simulation. Results: All protocols showed underestimation of CBF and ATT in the prolonged ATT range. With MVC, the protocol optimized for CBF only (CBFopt) had the lowest sensitivity to MVC, 33.47% and 60.21% error per 1% aBV in simulation and in vivo, respectively, among multi-PLD protocols. All multi-PLD protocols showed a significant decrease in estimation error when an extended kinetic model was used. Increasing flow dispersion at short ATTs caused increasing CBF and ATT overestimation in all protocols. Conclusion: CBFopt was the protocol least sensitive to prolonged ATT and MVC for CBF estimation while maintaining reasonably good performance in estimating ATT. Explicitly including a macrovascular component in the kinetic model was shown to be a feasible approach in controlling for MVC. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
48. Frame-Based Optimal Design
- Author
-
Mair, Sebastian, Rudolph, Yannick, Closius, Vanessa, Brefeld, Ulf, Hutchison, David, Series Editor, Kanade, Takeo, Series Editor, Kittler, Josef, Series Editor, Kleinberg, Jon M., Series Editor, Mattern, Friedemann, Series Editor, Mitchell, John C., Series Editor, Naor, Moni, Series Editor, Pandu Rangan, C., Series Editor, Steffen, Bernhard, Series Editor, Terzopoulos, Demetri, Series Editor, Tygar, Doug, Series Editor, Berlingerio, Michele, editor, Bonchi, Francesco, editor, Gärtner, Thomas, editor, Hurley, Neil, editor, and Ifrim, Georgiana, editor
- Published
- 2019
- Full Text
- View/download PDF
49. Optimal Experimental Design
- Author
-
Jaeger, Dieter, editor and Jung, Ranu, editor
- Published
- 2022
- Full Text
- View/download PDF
50. Predators' Functional Response: Statistical Inference, Experimental Design, and Biological Interpretation of the Handling Time
- Author
-
Nikos E. Papanikolaou, Theodore Kypraios, Hayden Moffat, Argyro Fantinou, Dionysios P. Perdikis, and Christopher Drovandi
- Subjects
Bayesian inference ,optimal experimental design ,handling time ,disc equation ,mechanistic understanding ,Evolution ,QH359-425 ,Ecology ,QH540-549.5 - Published
- 2021
- Full Text
- View/download PDF